Image preprocessing in classification and identification of diabetic eye diseases
- Sarki, Rubina, Ahmed, Khandakar, Wang, Hua, Zhang, Yanchun, Ma, Jiangang, Wang, Kate
- Authors: Sarki, Rubina , Ahmed, Khandakar , Wang, Hua , Zhang, Yanchun , Ma, Jiangang , Wang, Kate
- Date: 2021
- Type: Text , Journal article
- Relation: Data Science and Engineering Vol. 6, no. 4 (2021), p. 455-471
- Full Text:
- Reviewed:
- Description: Diabetic eye disease (DED) is a cluster of eye problems that affects diabetic patients. Identifying DED in retinal fundus images is crucial because early diagnosis and treatment can ultimately minimize the risk of visual impairment. The retinal fundus image plays a significant role in early DED classification and identification. The development of an accurate diagnostic model from retinal fundus images depends highly on image quality and quantity. This paper presents a methodical study on the significance of image processing for DED classification. The proposed automated classification framework for DED comprises several steps: image quality enhancement, image segmentation (region of interest), image augmentation (geometric transformation), and classification. The optimal results were obtained by combining traditional image processing methods with a newly built convolutional neural network (CNN) architecture. The newly built CNN combined with the traditional image processing approach delivered the best accuracy for DED classification. The results of the experiments conducted showed adequate accuracy, specificity, and sensitivity. © 2021, The Author(s).
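The preprocessing pipeline described in the abstract (quality enhancement, region-of-interest segmentation, geometric augmentation) can be sketched with standard "traditional" image-processing operations. The sketch below is illustrative only and assumes global histogram equalization, a centre crop, and flip/rotation augmentation as stand-ins for the paper's specific methods; the toy image and all function names are hypothetical.

```python
import numpy as np

def equalize_histogram(img):
    # Contrast enhancement via global histogram equalization,
    # one common "traditional" quality-enhancement step (assumption,
    # not necessarily the paper's exact method).
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lut[img]

def center_crop(img, size):
    # Crude region-of-interest step: keep a square region around the centre.
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def augment(img):
    # Geometric augmentation: flips and a 90-degree rotation.
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

# Toy 8-bit grayscale image standing in for a real fundus photograph.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
roi = center_crop(equalize_histogram(img), 48)
batch = augment(roi)  # four geometric variants ready for CNN training
```

Each augmented variant would then be fed to the classification CNN; the augmentation step multiplies the effective training-set size, which matters given the abstract's emphasis on image quantity.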
Malignant and non-malignant oral lesions classification and diagnosis with deep neural networks
- Liyanage, Viduni, Tao, Mengqiu, Park, Joon, Wang, Kate, Azimi, Somayyeh
- Authors: Liyanage, Viduni , Tao, Mengqiu , Park, Joon , Wang, Kate , Azimi, Somayyeh
- Date: 2023
- Type: Text , Journal article
- Relation: Journal of Dentistry Vol. 137 (2023)
- Full Text:
- Reviewed:
- Description: Objectives: Given the increasing incidence of oral cancer, it is essential to provide high-risk communities, especially in remote regions, with an affordable, user-friendly tool for visual lesion diagnosis. This proof-of-concept study explored the utility and feasibility of a smartphone application that can photograph and diagnose oral lesions. Methods: The images of oral lesions with confirmed diagnoses were sourced from oral and maxillofacial textbooks. In total, 342 images were extracted, encompassing lesions from various regions of the oral cavity such as the gingiva, palate, and labial mucosa. The lesions were segregated into three categories: Class 1 represented non-neoplastic lesions, Class 2 included benign neoplasms, and Class 3 contained premalignant/malignant lesions. The images were analysed using MobileNetV3 and EfficientNetV2 models, with the process producing an accuracy curve, confusion matrix, and receiver operating characteristic (ROC) curve. Results: The EfficientNetV2 model showed a steep increase in validation accuracy early in the iterations, plateauing at a score of 0.71. According to the confusion matrix, this model's testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions was 64% and 80% respectively. Conversely, the MobileNetV3 model exhibited a more gradual increase, reaching a plateau at a validation accuracy of 0.70. The MobileNetV3 model's testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions, according to the confusion matrix, was 64% and 82% respectively. Conclusions: Our proof-of-concept study effectively demonstrated the potential accuracy of AI software in distinguishing malignant lesions. This could play a vital role in remote screenings for populations with limited access to dental practitioners. 
However, the discrepancies between the classification of images and the results for "non-malignant lesions" call for further refinement of the models and the classification system used. Clinical significance: The findings of this study indicate that AI software has the potential to aid in the identification or screening of malignant oral lesions. Further improvements are required to enhance accuracy in classifying non-malignant lesions. © 2023 The Author(s)
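The per-class "testing accuracy" figures quoted from the confusion matrix correspond to each class's recall: correct predictions for a class divided by the number of images truly in that class. A minimal sketch of that computation, using an invented 3x3 confusion matrix (the counts below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical confusion matrix (rows = true class, columns = predicted):
# Class 1 non-neoplastic, Class 2 benign neoplasm, Class 3 premalignant/malignant.
cm = np.array([
    [16,  6,  3],   # true Class 1
    [ 4, 10,  4],   # true Class 2
    [ 2,  3, 20],   # true Class 3
])

# Per-class recall: diagonal (correct) counts over each row total.
per_class_recall = np.diag(cm) / cm.sum(axis=1)

# Overall accuracy: trace over the grand total.
overall_accuracy = np.trace(cm) / cm.sum()
```

With these made-up counts, Class 1 recall is 16/25 = 0.64 and Class 3 recall is 20/25 = 0.80, showing how a model can score well on malignant lesions while struggling on non-malignant ones, exactly the discrepancy the study reports.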